Robust Component-based Network Localization with Noisy Range Measurements
Accurate and robust localization is crucial for wireless ad-hoc and sensor
networks. Among localization techniques, component-based methods distinguish
themselves by coping with network sparseness and anchor sparseness. However,
component-based methods are sensitive to ranging noise, which may cause large
accumulated errors in either the component realization or the merging process.
This paper presents three results for robust component-based localization under
ranging noise. (1) For a rigid graph component, a novel method is proposed to
evaluate the graph's possible number of flip ambiguities under noise. In
particular, the graph's \emph{MInimal sepaRators that are neaRly cOllineaR
(MIRROR)} are identified as the cause of flip ambiguity, and the number of
MIRRORs indicates the possible number of flip ambiguities under noise. (2) The
sensitivity of a graph's local deformation to ranging noise is then
investigated by perturbation analysis. A novel Ranging Sensitivity Matrix (RSM)
is proposed to estimate the node location perturbations caused by ranging
noise. (3) By evaluating component robustness via the flipping and local
deformation risks, a Robust Component Generation and Realization (RCGR)
algorithm is developed, which generates components based on these robustness
metrics. RCGR was evaluated by simulations, which show much better noise
resistance and localization accuracy than state-of-the-art component-based
localization algorithms.
Comment: 9 pages, 15 figures, ICCCN 2018, Hangzhou, China
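The MIRROR condition above hinges on detecting separators whose nodes are nearly collinear, since reflecting a component across an almost-straight separator produces an indistinguishable flip under noise. The abstract does not give the detection rule, so the following is only a minimal sketch of one plausible near-collinearity test for a three-node separator; the function name and the area-based score are illustrative assumptions, not the paper's method.

```python
import math

def near_collinear(p, q, r, tol=0.05):
    """Score how close three 2-D points are to lying on one line.

    Returns (score, is_mirror): score is twice the triangle area divided
    by the squared longest side, so 0 means exactly collinear. The `tol`
    threshold is an illustrative choice, not taken from the paper.
    """
    ax, ay = q[0] - p[0], q[1] - p[1]
    bx, by = r[0] - p[0], r[1] - p[1]
    twice_area = abs(ax * by - ay * bx)          # cross product magnitude
    longest = max(math.dist(p, q), math.dist(q, r), math.dist(p, r))
    score = twice_area / (longest ** 2 + 1e-12)  # scale-invariant score
    return score, score < tol
```

Normalizing by the squared longest side makes the score scale-invariant, so the same threshold applies to separators of any physical size.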
Attending Category Disentangled Global Context for Image Classification
In this paper, we propose a general framework for image classification using
the attention mechanism and global context, which can be incorporated into
various network architectures to improve their performance. To investigate the
capability of the global context, we compare four mathematical models and
observe that the global context encoded by the category-disentangled
conditional generative model gives more guidance, as "knowing what is
task-irrelevant also reveals what is relevant". Based on this observation, we
define a novel Category Disentangled Global Context (CDGC) and devise a deep
network to obtain it. By attending to CDGC, the baseline networks can identify
the objects of interest more accurately, thus improving performance. We apply
the framework to many different network architectures and compare with the
state-of-the-art on four publicly available datasets. Extensive results
validate the effectiveness and superiority of our approach. Code will be made
public upon paper acceptance.
Comment: Under review
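The abstract does not specify the form of the attention over CDGC, but the general pattern of attending a backbone's features to a global context vector can be sketched as a simple channel attention: the context reweights feature channels by their affinity to it. Everything below (function name, pooling, softmax form) is an assumed common pattern, not the paper's architecture.

```python
import numpy as np

def attend_global_context(features, context):
    """Reweight feature channels by attention to a global context vector.

    features: (C, H, W) feature map from a backbone.
    context:  (C,) global context vector (e.g., a CDGC-like code).
    Channel attention = softmax over per-channel affinities, rescaled by
    C so that a uniform attention leaves the features unchanged.
    """
    C = features.shape[0]
    pooled = features.reshape(C, -1).mean(axis=1)   # global average pool
    affinity = pooled * context                      # per-channel affinity
    weights = np.exp(affinity - affinity.max())      # stable softmax
    weights /= weights.sum()
    return features * (C * weights)[:, None, None]
```

The `C *` rescaling is a design choice: with a zero (uninformative) context, every channel gets weight 1 and the layer reduces to the identity, so attaching it to a pretrained backbone cannot hurt the initial behavior.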
When Less is Enough: Positive and Unlabeled Learning Model for Vulnerability Detection
Automated code vulnerability detection has gained increasing attention in
recent years. Deep learning (DL)-based methods, which implicitly learn
vulnerable code patterns, have proven effective in vulnerability detection. The
performance of DL-based methods usually relies on the quantity and quality of
labeled data. However, current labeled data are generally collected
automatically, e.g., crawled from human-generated commits, making it hard to
ensure the quality of the labels. Prior studies have demonstrated that
non-vulnerable code (i.e., negative labels) tends to be unreliable in
commonly-used datasets, while vulnerable code (i.e., positive labels) is
labeled with more certainty. Given the large amount of unlabeled data available
in practice, it is necessary and worthwhile to leverage the positive data and
the large amount of unlabeled data for more accurate vulnerability detection.

In this paper, we focus on the Positive and Unlabeled (PU) learning problem
for vulnerability detection and propose a novel model named PILOT, i.e.,
PositIve and unlabeled Learning mOdel for vulnerability deTection. PILOT learns
only from positive and unlabeled data for vulnerability detection. It mainly
contains two modules: (1) a distance-aware label selection module that
generates pseudo-labels for selected unlabeled data, involving an inter-class
distance prototype and progressive fine-tuning; (2) a mixed-supervision
representation learning module that further alleviates the influence of noise
and enhances the discrimination of representations.
Comment: This paper is accepted by ASE 202
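The core of distance-aware label selection in PU learning is to pseudo-label only those unlabeled samples whose embedding distance to the positive class is decisively small or large, leaving the ambiguous middle band unlabeled. The abstract does not detail PILOT's rule, so the sketch below is a generic prototype-distance heuristic under stated assumptions; the thresholds and the margin parameter are illustrative.

```python
import numpy as np

def select_pseudo_labels(pos_emb, unl_emb, margin=1.0):
    """Distance-aware pseudo-labeling sketch for PU learning.

    pos_emb: (P, D) embeddings of known-vulnerable (positive) samples.
    unl_emb: (U, D) embeddings of unlabeled samples.
    Unlabeled points no farther from the positive prototype than the
    average positive sample get pseudo-label 1; points beyond
    mean + margin * std get pseudo-label 0; the rest stay -1 (unlabeled).
    """
    proto = pos_emb.mean(axis=0)                      # class prototype
    pos_d = np.linalg.norm(pos_emb - proto, axis=1)   # positive spread
    near = pos_d.mean()
    far = pos_d.mean() + margin * pos_d.std()
    d = np.linalg.norm(unl_emb - proto, axis=1)
    labels = np.full(len(unl_emb), -1)
    labels[d <= near] = 1
    labels[d >= far] = 0
    return labels
```

Keeping the ambiguous band unlabeled is the key design point: pseudo-label noise in that band would otherwise feed straight back into training, which is what the progressive fine-tuning in the abstract presumably mitigates.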
FAT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
well-designed perturbations. This can lead to disastrous results in critical
applications such as self-driving cars, surveillance security, and medical
diagnosis. At present, adversarial training is one of the most effective
defenses against adversarial examples. However, traditional adversarial
training struggles to achieve a good trade-off between clean accuracy and
robustness, since spurious features are still learned by DNNs. The intrinsic
reason is that traditional adversarial training cannot fully learn core
features from adversarial examples when adversarial noise and clean examples
are not disentangled. In this paper, we disentangle adversarial examples into
natural and perturbed patterns by bit-plane slicing. We assume that the higher
bit-planes represent natural patterns and the lower bit-planes represent
perturbed patterns. We propose Feature-Focusing Adversarial Training (FAT),
which differs from previous work in that it forces the model to focus on the
core features from natural patterns and reduces the impact of spurious features
from perturbed patterns. The experimental results demonstrate that FAT
outperforms state-of-the-art methods in both clean accuracy and adversarial
robustness.
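Bit-plane slicing itself is a standard image operation: an 8-bit pixel is split into its binary planes, and under the abstract's assumption the top planes carry the natural pattern while the low planes absorb small adversarial perturbations. A minimal sketch, with the split point `k` as an illustrative parameter (the paper's choice is not stated in the abstract):

```python
import numpy as np

def split_bit_planes(img, k=4):
    """Split an 8-bit image into natural and perturbed patterns (sketch).

    Following the abstract's assumption, the top (8 - k) bit-planes are
    kept as the "natural" pattern and the low k bit-planes as the
    "perturbed" pattern. img is a uint8 array; the two parts sum back
    to the original image exactly.
    """
    low_mask = (1 << k) - 1           # e.g. k=4 -> 0x0F
    high_mask = 0xFF ^ low_mask       # e.g. k=4 -> 0xF0
    natural = img & high_mask         # high bit-planes
    perturbed = img & low_mask        # low bit-planes
    return natural, perturbed
```

The intuition: an L-infinity perturbation of magnitude below 2^k can only toggle the low k planes (up to carry effects), so masking them removes most of the adversarial noise while keeping the coarse image structure.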
3D highly heterogeneous thermal model of pineal gland in-vitro study for electromagnetic exposure using finite volume method
In this paper, the relationship between electromagnetic power absorption and temperature distributions inside highly heterogeneous biological samples was accurately determined using the finite volume method. An in-vitro study of the pineal gland, which is responsible for physiological activities, was simulated for the first time to illustrate the effectiveness of the proposed method.
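The coupling the abstract describes, electromagnetic power absorption (SAR) acting as a heat source in a heterogeneous medium, can be illustrated with a one-dimensional explicit finite-volume step. This is only a sketch of the finite-volume idea under stated assumptions (1-D, explicit time stepping, insulated boundaries, no perfusion term); it is not the paper's 3-D solver, and all names and values are illustrative.

```python
import numpy as np

def step_bioheat_1d(T, k, rho, c, sar, dx, dt):
    """One explicit finite-volume step of a 1-D heat equation with an
    electromagnetic source term (illustrative sketch).

    T, k, rho, c, sar: per-cell arrays of temperature (K), thermal
    conductivity (W/m/K), density (kg/m^3), specific heat (J/kg/K),
    and specific absorption rate (W/kg). Face conductivities use the
    harmonic mean, the usual FVM choice for heterogeneous media;
    boundaries are insulated (zero flux).
    """
    kf = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # harmonic-mean face k
    flux = kf * (T[1:] - T[:-1]) / dx            # heat flux at faces
    div = np.zeros_like(T)
    div[:-1] += flux                              # flux into left cell
    div[1:] -= flux                               # flux out of right cell
    return T + dt * (div / dx + rho * sar) / (rho * c)
```

The harmonic mean at cell faces is the detail that makes the finite-volume method well suited to highly heterogeneous samples: it keeps the flux continuous across jumps in conductivity, which an arithmetic mean would overestimate.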
Epidemic Risk Assessment by a Novel Communication Station Based Method
The COVID-19 pandemic has caused serious consequences in the last few months, and controlling it has been the most important objective. With effective prevention and control methods, the epidemic has gradually come under control in some countries, and it is essential to ensure safe work resumption in the future. Although some approaches have been proposed to measure people's health conditions, such as filling in health information forms or evaluating people's travel records, they cannot provide a fine-grained assessment of the epidemic risk. In this paper, we propose a novel epidemic risk assessment method based on the granular data collected by communication stations. We first compute the epidemic risk of these stations in different intervals by combining the number of infected persons and the way they pass through the station. Then, we calculate the personal risk in different intervals according to the station trajectory of the queried person. This method can assess people's epidemic risk accurately and efficiently. We also conduct extensive simulations, and the results verify the effectiveness of the proposed method.
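The two-stage computation the abstract outlines (per-station risk per interval, then per-person risk aggregated over a trajectory) can be sketched in a few lines. The weighting scheme and aggregation below are illustrative assumptions; the abstract does not give the actual formulas.

```python
def station_risk(infected_counts, pass_weights):
    """Risk of one station in one interval: a weighted count of infected
    visitors, where each weight reflects how the person passed through
    (e.g., dwelled vs. merely transited). Illustrative, not the paper's
    exact model.
    """
    return sum(n * w for n, w in zip(infected_counts, pass_weights))

def person_risk(trajectory, risk_by_station_interval):
    """Aggregate a person's risk over the (station, interval) pairs in
    their trajectory, as recorded by communication stations. Missing
    pairs contribute zero risk.
    """
    return sum(risk_by_station_interval.get(si, 0.0) for si in trajectory)
```

A usage example: with two infected-visitor groups at a station weighted 1.0 (dwelled) and 0.5 (transited), `station_risk([2, 1], [1.0, 0.5])` gives 2.5, and a queried person's risk is the sum of such values over every station-interval pair their phone was observed at.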
A Practical Neighbor Discovery Framework for Wireless Sensor Networks
Neighbor discovery is a crucial operation executed frequently throughout the life cycle of a Wireless Sensor Network (WSN). Various protocols have been proposed to minimize the discovery latency or to prolong the lifetime of sensors. However, none of them addresses all the critical concerns stemming from real WSNs, including communication collisions, latency constraints, and energy consumption limitations. In this paper, we propose Spear, the first practical neighbor discovery framework to meet all these requirements. Spear offers two new methods to reduce communication collisions, thus boosting the discovery rate of existing neighbor discovery protocols. Spear also takes into consideration latency constraints and facilitate